
Change of Basis

There are many cases in quantum mechanics where we need to express a vector in a different basis. For example, in our discussion of spin states, we can use $|{+z}\rangle$ and $|{-z}\rangle$ as the basis vectors, but we can also use $|{+x}\rangle$ and $|{-x}\rangle$ as the basis vectors.

The change of basis is very important in many areas of physics. For instance, much of special relativity boils down to changing the basis of spacetime vectors. We now explore some mathematical concepts that will help us understand how to change the basis of a vector.


The Transformation Matrix

Suppose $\{ |a_i\rangle \}$ and $\{ |b_i\rangle \}$ are two sets of orthonormal basis vectors. In order to transform one basis into the other, we need to find an operator that can transform the basis vectors.

Denote this operator as $U$, such that $U |a_i\rangle = |b_i\rangle$ for every $i$. In order to find out what this operator is, we can use a neat trick using the outer product.

Consider the following: $|b_j\rangle \langle a_j | a_i \rangle$. When $j = i$, this is just $|b_i\rangle$, since $\langle a_i | a_i \rangle = 1$. When $j \neq i$, this is $0$, since the $|a_i\rangle$ are orthonormal. Therefore, we can write $U$ as a sum of all of these outer products:

$$U = \sum_j |b_j\rangle \langle a_j|$$
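As a quick numerical check, we can build $U$ from outer products for the spin example. This is a minimal sketch using NumPy, with the standard column-vector representations of the spin states in the $z$-basis:

```python
import numpy as np

# Old basis: spin-z eigenstates |+z>, |-z> as column vectors (z-basis representation).
a = [np.array([1, 0], dtype=complex),
     np.array([0, 1], dtype=complex)]

# New basis: spin-x eigenstates |+x>, |-x>.
b = [np.array([1, 1], dtype=complex) / np.sqrt(2),
     np.array([1, -1], dtype=complex) / np.sqrt(2)]

# U = sum_j |b_j><a_j|, built from outer products. np.outer does not
# conjugate anything, so we conjugate the "bra" explicitly.
U = sum(np.outer(b_j, a_j.conj()) for a_j, b_j in zip(a, b))

# U maps each old basis vector to the corresponding new one.
for a_j, b_j in zip(a, b):
    assert np.allclose(U @ a_j, b_j)
```

The loop at the end verifies $U |a_i\rangle = |b_i\rangle$ for each basis vector.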

Unitarity

Previously, we have seen that a Hermitian operator satisfies $H^\dagger = H$. Let's see what happens when we take the adjoint of $U$, remembering the rules of adjoints (the adjoint of $|b\rangle \langle a|$ is $|a\rangle \langle b|$):

$$U^\dagger = \sum_j |a_j\rangle \langle b_j|$$

Notice what happens when we multiply $U^\dagger$ and $U$:

$$U^\dagger U = \sum_i \sum_j |a_i\rangle \langle b_i | b_j \rangle \langle a_j|$$

Since $\langle b_i | b_j \rangle = \delta_{ij}$ ensures that the summand is only nonzero when $i = j$, we can remove one of the sums:

$$U^\dagger U = \sum_i |a_i\rangle \langle a_i| = \mathbb{I}$$

where we have used the completeness relation $\sum_i |a_i\rangle \langle a_i| = \mathbb{I}$. Therefore, $U^\dagger U = \mathbb{I}$, meaning $U^\dagger = U^{-1}$. We call such an operator a unitary operator. Unitary operators will be very important when we discuss the time-evolution of quantum states.
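Unitarity holds for any pair of orthonormal bases, not just the spin example. The following sketch checks this numerically; the two bases are taken as the columns of random unitary matrices obtained via a QR decomposition (a hypothetical setup chosen only for verification):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4

# The columns of a unitary matrix form an orthonormal basis of C^n, so we
# generate two random bases via QR decompositions of random complex matrices.
A, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
B, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# U = sum_j |b_j><a_j| over the basis vectors (the columns of A and B).
U = sum(np.outer(B[:, j], A[:, j].conj()) for j in range(n))

# U^dagger U = identity, i.e. U is unitary.
assert np.allclose(U.conj().T @ U, np.eye(n))
```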

We can summarize what we have found in the following theorem:

Given two sets of orthonormal and complete basis vectors $\{ |a_i\rangle \}$ and $\{ |b_i\rangle \}$, there always exists a unitary operator $U = \sum_j |b_j\rangle \langle a_j|$ that transforms one basis into the other, such that $U |a_i\rangle = |b_i\rangle$.

Matrix Representation

As always, we can represent operators as matrices, and the change-of-basis operator is no exception. Recall that the $(i, j)$-th element of the matrix representation of an operator $\hat{O}$ is $O_{ij} = \langle a_i | \hat{O} | a_j \rangle$. Therefore, the matrix representation of $U$ is:

$$U_{ij} = \langle a_i | U | a_j \rangle = \langle a_i | b_j \rangle$$

It is insightful to see that this is the same as the transformation matrix we have seen in linear algebra:

$$U_{ij} = \mathbf{e}_i \cdot \mathbf{e}'_j$$

where $\{ \mathbf{e}_i \}$ and $\{ \mathbf{e}'_j \}$ are two sets of orthonormal basis vectors. In the case of curvilinear coordinates, we can use a combination of the metric tensor and the Jacobian to find the transformation matrix.
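To make the matrix representation concrete, here is a short sketch that computes $U_{ij} = \langle a_i | b_j \rangle$ element by element for the spin-$z$ and spin-$x$ bases (`np.vdot` conjugates its first argument, which matches the bra):

```python
import numpy as np

# Spin-z basis |+z>, |-z> and spin-x basis |+x>, |-x>, in the z-basis representation.
a = [np.array([1, 0], dtype=complex),
     np.array([0, 1], dtype=complex)]
b = [np.array([1, 1], dtype=complex) / np.sqrt(2),
     np.array([1, -1], dtype=complex) / np.sqrt(2)]

# Matrix elements U_ij = <a_i|b_j>.
U = np.array([[np.vdot(a_i, b_j) for b_j in b] for a_i in a])

# For this example, U is the real matrix [[1, 1], [1, -1]] / sqrt(2).
assert np.allclose(U, np.array([[1, 1], [1, -1]]) / np.sqrt(2))
```

Note that the columns of this matrix are exactly the new basis vectors written in the old basis, which is another way to read $U_{ij} = \langle a_i | b_j \rangle$.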

Components of Vectors Under Change of Basis

The next question we might ask is: how do the components of a vector change under a change of basis? If one is familiar with tensors, one might recall that vectors have contravariant components; the components transform in the opposite way to the basis vectors.

It is easy to see why this is the case. I borrow from Eigenchris on YouTube for this explanation. Consider a vector $\vec{v}$ with components $v^1$, $v^2$, and $v^3$ in the standard basis $\vec{e}_1$, $\vec{e}_2$, and $\vec{e}_3$. Then, $\vec{v}$ can be written as $\vec{v} = v^1 \vec{e}_1 + v^2 \vec{e}_2 + v^3 \vec{e}_3$.

The key insight is that we can write this as the product of a row vector and a column vector, where the row vector contains the basis vectors and the column vector contains the components:

$$\vec{v} = \begin{bmatrix} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \end{bmatrix} \begin{bmatrix} v^1 \\ v^2 \\ v^3 \end{bmatrix}$$

Next, suppose a new basis $\vec{e}'_1$, $\vec{e}'_2$, and $\vec{e}'_3$ is given and represented by some matrix $F$. Then, we can write the new basis row vector as the following:

$$\begin{bmatrix} \vec{e}'_1 & \vec{e}'_2 & \vec{e}'_3 \end{bmatrix} = \begin{bmatrix} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \end{bmatrix} F$$

We want to be able to write $\vec{v}$ in terms of the new basis:

$$\vec{v} = \begin{bmatrix} \vec{e}'_1 & \vec{e}'_2 & \vec{e}'_3 \end{bmatrix} \begin{bmatrix} v'^1 \\ v'^2 \\ v'^3 \end{bmatrix}$$

The trick is that we can start from the original expansion and insert anything in the middle that is equal to the identity matrix. For instance, we can insert $F F^{-1}$:

$$\vec{v} = \begin{bmatrix} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \end{bmatrix} F F^{-1} \begin{bmatrix} v^1 \\ v^2 \\ v^3 \end{bmatrix} = \begin{bmatrix} \vec{e}'_1 & \vec{e}'_2 & \vec{e}'_3 \end{bmatrix} \left( F^{-1} \begin{bmatrix} v^1 \\ v^2 \\ v^3 \end{bmatrix} \right)$$

As such, we can see that:

$$\begin{bmatrix} v'^1 \\ v'^2 \\ v'^3 \end{bmatrix} = F^{-1} \begin{bmatrix} v^1 \\ v^2 \\ v^3 \end{bmatrix}$$
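We can sanity-check this contravariant rule numerically: transform the basis with a matrix $F$ and the components with $F^{-1}$, and confirm the vector itself is unchanged. The particular $F$ below is a hypothetical invertible matrix chosen only for illustration:

```python
import numpy as np

# Standard basis vectors as the columns of the identity.
E = np.eye(3)

# A hypothetical invertible change-of-basis matrix F; the new basis vectors
# are linear combinations of the old ones: e'_j = sum_i e_i F_ij.
F = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
E_new = E @ F

# Components transform with F^{-1}, opposite to the basis vectors.
v = np.array([1.0, 2.0, 3.0])
v_new = np.linalg.solve(F, v)   # = F^{-1} v

# The vector itself is invariant under the change of basis.
assert np.allclose(E_new @ v_new, E @ v)
```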

Going back to quantum mechanics, the principle is the same: the components of a vector in the new basis are given by the inverse of the transformation matrix times the components in the old basis. But because the transformation matrix is unitary, its inverse is the same as its adjoint. Then:

$$v'_i = \sum_j (U^\dagger)_{ij} v_j = \sum_j \langle b_i | a_j \rangle v_j$$
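Back in the quantum setting, here is a minimal sketch of the component rule $v' = U^\dagger v$ for the spin bases, with $U$ written out explicitly:

```python
import numpy as np

# Transformation matrix U_ij = <a_i|b_j> between the spin-z and spin-x bases.
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# A state written in the old (spin-z) basis: |psi> = |+z>.
v = np.array([1, 0], dtype=complex)

# Components transform with the inverse of U, which for a unitary
# operator is just the adjoint.
v_new = U.conj().T @ v

# Indeed |+z> = (|+x> + |-x>)/sqrt(2): equal components in the spin-x basis.
assert np.allclose(v_new, np.array([1, 1]) / np.sqrt(2))
```

Note that the norm of the state is preserved, as it must be for a unitary transformation.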

The fact that basis vectors are covariant (they transform with the forward transformation $F$, or $U$) and components are contravariant (they transform with the inverse transformation $F^{-1}$, or $U^\dagger$) is why the vector itself is invariant under a change of basis. Intuitively, it makes sense that the components and basis "cancel" each other out.